Review of deep learning-based medical image segmentation
CAO Yuhong, XU Hai, LIU Sun'ao, WANG Zixiao, LI Hongliang
Journal of Computer Applications    2021, 41 (8): 2273-2287.   DOI: 10.11772/j.issn.1001-9081.2020101638
As a fundamental and key task in computer-aided diagnosis, medical image segmentation aims to accurately recognize target regions such as organs, tissues and lesions at the pixel level. Unlike natural images, medical images show high complexity in texture and have boundaries that are difficult to judge because of the ambiguity caused by the heavy noise introduced by the limitations of imaging technology and equipment. Furthermore, annotating medical images depends heavily on the expertise and experience of experts, leading to limited available annotations for training and potential annotation errors. Because medical images suffer from ambiguous boundaries, limited annotated data and large annotation errors, it is a great challenge for auxiliary diagnosis systems based on traditional image segmentation algorithms to meet the demands of clinical applications. Recently, with the wide application of the Convolutional Neural Network (CNN) in computer vision and natural language processing, deep learning-based medical image segmentation algorithms have achieved tremendous success. Firstly, the latest research progress of deep learning-based medical image segmentation was summarized, including the basic architectures, loss functions and optimization methods of medical image segmentation algorithms. Then, in view of the limited annotated data of medical images, the mainstream semi-supervised studies on medical image segmentation were summarized and analyzed. Besides, studies on measuring the uncertainty caused by annotation errors were introduced. Finally, the characteristics of medical image segmentation were summarized and analyzed, and its potential future trends were listed.
Weakly supervised fine-grained image classification algorithm based on attention-attention bilinear pooling
LU Xinwei, YU Pengfei, LI Haiyan, LI Hongsong, DING Wenqian
Journal of Computer Applications    2021, 41 (5): 1319-1325.   DOI: 10.11772/j.issn.1001-9081.2020071105
With the rapid development of artificial intelligence, the purpose of image classification is not only to identify the major categories of objects but also to classify images of the same category into more detailed subcategories. In order to effectively discriminate the small differences between categories, a fine-grained classification algorithm was proposed based on Attention-Attention Bilinear Pooling (AABP). Firstly, the Inception V3 pre-trained model was applied to extract global image features, and the local attention regions on the feature maps were predicted with depthwise separable convolution. Then, the Weakly Supervised Data Augmentation Network (WS-DAN) was applied to feed the augmented images back into the network, so as to enhance the generalization ability of the network and prevent overfitting. Finally, linear fusion of the further extracted attention features was performed in the AABP network to improve the accuracy of classification. Experimental results show that the proposed method achieves an accuracy of 88.51% and a top-5 accuracy of 97.65% on the CUB-200-2011 dataset, an accuracy of 89.77% and a top-5 accuracy of 99.27% on the Stanford Cars dataset, and an accuracy of 93.5% and a top-5 accuracy of 97.96% on the FGVC-Aircraft dataset.
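The core of the AABP step above is bilinear pooling of backbone features against predicted attention maps. As a rough illustration of that pooling (not the paper's implementation; the shapes, toy data and mean-pooling choice are assumptions), each attention map re-weights the feature maps and is pooled into one part-feature vector:

```python
import numpy as np

def bilinear_attention_pool(features, attentions):
    """Bilinear attention pooling (sketch): each attention map re-weights
    the backbone feature maps elementwise, and global average pooling
    turns the weighted maps into one part-feature vector.
    features:   (C, H, W) feature maps
    attentions: (M, H, W) attention maps
    returns:    (M, C) part-feature matrix."""
    M, C = attentions.shape[0], features.shape[0]
    parts = np.empty((M, C))
    for k in range(M):
        weighted = features * attentions[k]    # broadcast one map over all C channels
        parts[k] = weighted.mean(axis=(1, 2))  # global average pooling
    return parts

rng = np.random.default_rng(0)
f = rng.random((8, 4, 4))   # toy "feature maps"
a = rng.random((2, 4, 4))   # toy "attention maps"
p = bilinear_attention_pool(f, a)
print(p.shape)  # (2, 8)
```

In the full method the resulting part features would be concatenated and fed to the classifier.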
Face frontalization generative adversarial network algorithm based on face feature map symmetry
LI Hongxia, QIN Pinle, YAN Hanmei, ZENG Jianchao, BAO Qianyue, CHAI Rui
Journal of Computer Applications    2021, 41 (3): 714-720.   DOI: 10.11772/j.issn.1001-9081.2020060779
At present, research on face frontalization mainly addresses the yaw problem, and pays less attention to frontalizing side faces affected by yaw and pitch simultaneously, as in real scenes such as surveillance video. Aiming at this problem, as well as at the incomplete identity information retained in front face images generated from multi-angle side faces, a Generative Adversarial Network (GAN) based on feature map symmetry and a periocular feature preserving loss was proposed. Firstly, according to the prior of facial symmetry, a feature map symmetry module was proposed: a face key point detector was used to detect the position of the nasal tip, and mirror symmetry was applied to the feature map extracted by the encoder according to the nasal tip, so as to alleviate the lack of facial information at the feature level. Then, drawing on the idea of periocular recognition, the periocular feature preserving loss was added to the existing identity preserving method for generated images, training the generator to generate realistic and identity-preserving front face images. Experimental results show that the facial details of the generated images are well preserved, and the average Rank-1 recognition rate of faces at all angles under pitch on the CAS-PEAL-R1 dataset is 99.03%, indicating that the proposed algorithm can effectively solve the frontalization problem of multi-angle side faces.
Multipath transmission selection algorithm based on immune connectivity model
ZHANG Zhengwan, ZHANG Chunjiong, LI Hongbing, XIE Tao
Journal of Computer Applications    2020, 40 (12): 3571-3577.   DOI: 10.11772/j.issn.1001-9081.2020040492
In order to solve the problems of high node energy consumption and low data transmission reliability caused by uneven node deployment in Wireless Sensor Networks (WSNs), a multipath transmission selection algorithm based on an immune connectivity model was proposed. When data transmission failed, an immune mechanism was used to select the fitness functions of paths, so as to optimize the transmission path and reduce the energy consumption of nodes. Experiments were performed to evaluate the proposed algorithm with indicators such as network lifetime, end-to-end transmission delay, coverage ratio, transmission reliability and load distribution. The experimental results show that the proposed algorithm can better balance the load, prolong the network lifetime, and ensure the reliability of data transmission. The proposed algorithm can be applied to the design of sensor networks with high requirements on energy efficiency, scalability, network lifetime and network overhead.
QRS complex detection algorithm of electrocardiograph based on Shannon energy and adaptive threshold
WANG Zhizhong, LI Hongyi, HAN Chuang
Journal of Computer Applications    2020, 40 (1): 304-310.   DOI: 10.11772/j.issn.1001-9081.2019050818
In view of the problem that existing QRS complex detection algorithms for the electrocardiograph are still not ideal for some abnormal signals, a QRS complex detection method combining Shannon energy with an adaptive threshold was proposed to solve the problem of low detection accuracy. Firstly, the Shannon energy envelope was extracted from the pre-processed signal. Then, the QRS complexes were detected by the improved adaptive threshold method. Finally, the locations of the detected QRS complexes were determined according to the enhanced QRS signal. The MIT-BIH arrhythmia database was employed to evaluate the performance of the proposed algorithm. The results show that the algorithm can accurately locate the QRS complexes even when high P waves, T waves, irregular rhythm and serious noise interference exist in the signal, with the sensitivity, positive predictivity and accuracy over the whole dataset reaching 99.88%, 99.85% and 99.73% respectively; meanwhile, the proposed algorithm completes the QRS detection task quickly while guaranteeing accuracy.
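The Shannon energy envelope mentioned above can be sketched as follows (a minimal illustration, not the paper's exact pre-processing; the window length, normalization constants and `shannon_energy_envelope` name are assumptions):

```python
import numpy as np

def shannon_energy_envelope(signal, win=31):
    """Shannon energy envelope of an ECG signal (illustrative sketch).
    The differentiated signal is normalized to [-1, 1]; the Shannon
    energy -x^2 * log(x^2) emphasizes mid-amplitude QRS slopes while
    attenuating low-level noise; a moving average then smooths the
    energies into an envelope whose peaks mark candidate QRS complexes."""
    d = np.diff(signal)
    d = d / (np.max(np.abs(d)) + 1e-12)      # normalize slopes to [-1, 1]
    e = -d ** 2 * np.log(d ** 2 + 1e-12)     # Shannon energy per sample
    kernel = np.ones(win) / win              # moving-average smoothing window
    return np.convolve(e, kernel, mode='same')
```

An adaptive threshold on this envelope (omitted here) would then pick the QRS locations.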
Fast stitching method for dense repetitive structure images based on grid-based motion statistics algorithm and optimal seam
MU Qi, TANG Yang, LI Zhanli, LI Hong'an
Journal of Computer Applications    2020, 40 (1): 239-244.   DOI: 10.11772/j.issn.1001-9081.2019061045
For images with dense repetitive structures, common algorithms produce a large number of false matches, resulting in obvious ghosting in the final image and high time consumption. To solve these problems, a fast stitching method for dense repetitive structure images was proposed based on the Grid-based Motion Statistics (GMS) algorithm and the optimal seam algorithm. Firstly, a large number of coarse matching points were extracted from the overlapping regions. Then, the GMS algorithm was used for precise matching, and the transformation model was estimated on that basis. Finally, the dynamic-programming-based optimal seam algorithm was adopted to complete the image stitching. The experimental results show that the proposed method can effectively stitch images with dense repetitive structures: ghosting is effectively suppressed, and the stitching time is significantly reduced, with the average stitching speed being 7.4 times that of the traditional Scale-Invariant Feature Transform (SIFT) algorithm, 3.2 times that of the Speeded Up Robust Features (SURF) algorithm, 4.1 times that of the area-blocking-based SIFT algorithm, and 1.4 times that of the area-blocking-based SURF algorithm. The proposed algorithm effectively eliminates the ghosting of dense repetitive structure stitching and shortens the stitching time.
Low-illumination image enhancement algorithm based on multi-scale gradient domain guided filtering
LI Hong, WANG Ruiyao, GENG Zexun, HU Haifeng
Journal of Computer Applications    2019, 39 (10): 3046-3052.   DOI: 10.11772/j.issn.1001-9081.2019040642
An improved low-illumination image enhancement algorithm was proposed to solve the problems that the overall intensity of low-illumination color images is low, the colors in enhanced images are easily distorted, and some enhanced image details are drowned in pixels with low gray values. Firstly, the image to be processed was converted to the Hue Saturation Intensity (HSI) color space, and nonlinear global intensity correction was carried out on the intensity component. Then, an intensity enhancement model based on multi-scale gradient domain guided filtering was put forward to enhance the corrected intensity component, and intensity correction was further performed to avoid color distortion. Finally, the image was converted back to the Red Green Blue (RGB) color space. Experimental results show that the enhanced images have their intensity increased by more than 90.0% on average and their sharpness increased by more than 123.8% on average, mainly owing to the intensity smoothing and enhancement ability of multi-scale gradient domain guided filtering; at the same time, owing to the reduction of color distortion, the detail performance of the enhanced images increases by more than 18.2% on average. Because it uses the intensity enhancement model based on multi-scale gradient domain guided filtering and a histogram-adaptive intensity correction algorithm, the proposed algorithm is suitable for enhancing color images captured at night and under other weak light sources.
Frequent subtree mining method based on coverage patterns
XIA Ying, LI Hongxu
Journal of Computer Applications    2017, 37 (9): 2439-2442.   DOI: 10.11772/j.issn.1001-9081.2017.09.2439
Unordered trees are widely used for semi-structured data modeling, and mining frequent subtrees from them helps to discover hidden knowledge. Traditional methods of mining frequent subtrees often output large-scale frequent subtrees with redundant information, which reduces the efficiency of subsequent operations. In view of these shortcomings, the Mining CoveRage Pattern (MCRP) algorithm was proposed for mining coverage patterns. Firstly, a tree coding rule based on tree width and number of children was presented. Then, all candidate subtrees were generated by edge extension based on the maximum prefix coding sequence. Finally, a set of coverage patterns was output on the basis of frequent subtrees and the δ'-coverage concept. Compared with traditional algorithms for mining frequent closed tree patterns and maximal frequent tree patterns, the proposed algorithm outputs fewer frequent subtrees while preserving all of them, and the processing efficiency is increased by 15% to 25%. The experimental results show that the algorithm can effectively reduce the size and redundant information of the output frequent subtrees, and that it has high feasibility in practical operation.
Approach to network security situational element extraction based on parallel reduction
ZHAO Dongmei, LI Hong
Journal of Computer Applications    2017, 37 (4): 1008-1013.   DOI: 10.11772/j.issn.1001-9081.2017.04.1008
The quality of network security situational element extraction plays a crucial role in network security situation assessment. However, most existing extraction methods rely on prior knowledge and are not suitable for processing network security situational data. For effective and accurate extraction of network security situational elements, a parallel reduction algorithm based on a matrix of attribute importance was proposed. Parallel reduction was introduced into classical rough set theory, so that a single decision information table could be expanded into multiple tables without affecting the classification. Conditional entropy was used to calculate attribute importance, and redundant attributes were deleted according to the reduction rules, so that the network security situational elements were extracted efficiently. To verify the efficiency of the proposed algorithm, classification prediction was implemented on Weka. Compared with using all the attributes, using the attributes reduced by the proposed algorithm cut the classification modeling time on the NSL-KDD dataset by 16.6%. Compared with three existing element extraction algorithms (the Genetic Algorithm (GA), the Greedy Search Algorithm (GSA), and the Attribute Reduction based on Conditional Entropy (ARCE) algorithm), the proposed algorithm has a higher recall rate and a lower false positive rate. The experimental results show that the dataset reduced by the proposed algorithm has better classification performance, realizing efficient extraction of network security situational elements.
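The attribute-importance computation above rests on conditional entropy: an attribute whose values make the decision nearly certain has low H(D|A) and is therefore important, while attributes that leave the decision entropy unchanged are redundant. A minimal sketch (the reduction rules and the parallel expansion of the decision table are omitted; the function name is an assumption):

```python
import math
from collections import Counter, defaultdict

def conditional_entropy(attr_values, decisions):
    """H(D | A): the entropy of the decision attribute D that remains
    after observing condition attribute A. Lower values mean the
    attribute is more important for classification."""
    n = len(decisions)
    groups = defaultdict(list)
    for a, d in zip(attr_values, decisions):
        groups[a].append(d)          # partition decisions by attribute value
    h = 0.0
    for vals in groups.values():
        p_a = len(vals) / n          # P(A = a)
        counts = Counter(vals)
        h_d_given_a = -sum((c / len(vals)) * math.log2(c / len(vals))
                           for c in counts.values())  # H(D | A = a)
        h += p_a * h_d_given_a
    return h
```

For example, an attribute that perfectly predicts the decision gives H(D|A) = 0, while an unrelated one leaves the full decision entropy.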
Simulation on low speed stable operation of full-order observer for induction motor
LI Hongbo, JIANG Lin, WANG Haitang
Journal of Computer Applications    2014, 34 (4): 1213-1216.   DOI: 10.11772/j.issn.1001-9081.2014.04.1213
For the low-speed instability problem of the speed-sensorless vector control system of an induction motor based on a full-order flux observer, the cause of observer instability in the low-speed generating region was analyzed by applying Popov's hyperstability theory, and a design criterion for the feedback gain was proposed to stabilize the observer in low-speed mode. The stability analysis was simplified based on rotor flux orientation, and the multi-dimensional problem of system pole stability was transformed into a one-dimensional problem of system zero stability by using the Routh-Hurwitz criterion. Furthermore, the stability condition of the speed estimation system was derived and a design method for the stabilizing feedback gain was obtained. The simulation results show that the speed estimation system can work stably at a low speed of 50 revolutions per minute and a very low speed of 10 revolutions per minute. Compared with the traditional pole assignment approach, the system has better convergence and stability in the low-speed generating region, and improves the dynamic and static performance of the speed-sensorless vector control system in the low-speed region.
Personalized recommendation algorithm integrating roulette walk and combined time effect
ZHAO Ting, XIAO Ruliang, SUN Cong, CHEN Hongtao, LI Yuanxin, LI Hongen
Journal of Computer Applications    2014, 34 (4): 1114-1117.   DOI: 10.11772/j.issn.1001-9081.2014.04.1114
The traditional graph-based recommendation algorithm neglects the combined time factor, which results in poor recommendation quality. In order to solve this problem, a personalized recommendation algorithm integrating roulette walk and the combined time effect was proposed. Based on the user-item bipartite graph, the algorithm introduced an attenuation function to quantize the combined time factor as the association probability of the nodes; then a roulette selection model was utilized to select the next target node according to those association probabilities; finally, a top-N recommendation was provided for each user. The experimental results show that the improved algorithm is better in terms of precision, recall and coverage than the conventional PersonalRank random-walk algorithm.
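The two ingredients above, an attenuation function for the time effect and roulette-wheel selection over association probabilities, can be sketched as follows (the exponential decay, the half-life and both function names are illustrative assumptions, not the paper's parameters):

```python
import math
import random

def decay_weight(t_interaction, t_now, half_life=30.0):
    """Hypothetical exponential attenuation: recent interactions carry
    more weight; half_life is in the same units as the timestamps."""
    return math.exp(-math.log(2) * (t_now - t_interaction) / half_life)

def roulette_select(neighbors, rng):
    """Roulette-wheel selection: pick the next node with probability
    proportional to its (time-decayed) association weight.
    neighbors: list of (node, weight) pairs; rng: random.Random."""
    total = sum(w for _, w in neighbors)
    r = rng.random() * total
    acc = 0.0
    for node, w in neighbors:
        acc += w
        if r <= acc:
            return node
    return neighbors[-1][0]   # guard against float rounding
```

In the full walk, each step would decay the edge weights first and then spin the wheel to choose the next node.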
Chinese phrase parsing with semantic information
GENG Lifei, LI Honglian, LYU Xueqiang, WU Yunfang
Journal of Computer Applications    2014, 34 (4): 1109-1113.   DOI: 10.11772/j.issn.1001-9081.2014.04.1109
To deal with the poor performance of word sense disambiguation in parsing, a Chinese phrase parsing approach based on disambiguation of Chinese part of speech was proposed. Firstly, the part of speech of TongYiCi CiLin was expanded, and then the original words in the training and test sets were substituted with semantic codes; in this process, the part of speech of each word was used for word sense disambiguation. The experimental results on the Penn Chinese TreeBank (CTB) show that the proposed method achieves a precision of 80.30%, a recall of 78.12% and an F-measure of 79.19%. Compared with the system without disambiguation, the presented approach can effectively improve the performance of phrase parsing.
Security analysis of range query with single assertion on encrypted data
GU Chunsheng, JING Zhengjun, LI Hongwei, YU Zhimin
Journal of Computer Applications    2014, 34 (4): 1019-1024.   DOI: 10.11772/j.issn.1001-9081.2014.04.1019
To protect their privacy, users often transfer encrypted sensitive data to a semi-trustworthy service provider. Cai et al. (CAI K, ZHANG M, FENG D. Secure range query with single assertion on encrypted data [J]. Chinese Journal of Computers, 2011, 34(11): 2093-2103) first presented a ciphertext-only secure range query scheme with a single assertion on encrypted data to prevent leakage of users' private information, whereas previous schemes for range queries on encrypted data were implemented through many assertions. However, by applying the principles of trigonometric functions and matrix theory, the rank of the sensitive data can be derived directly from the protected interval index; hence, the scheme is not ciphertext-only secure. To avoid this security drawback, an improved secure scheme was constructed by introducing a random element, and its complexity was analyzed.
Improved algorithm for no-reference quality assessment of blurred image
LI Honglin, ZHANG Qi, YANG Dawei
Journal of Computer Applications    2014, 34 (3): 797-800.   DOI: 10.11772/j.issn.1001-9081.2014.03.0797
A fast and effective no-reference quality assessment algorithm for blurred images, based on improving the classic re-blur processing algorithm, was proposed to address the high computational cost of traditional methods. The proposed algorithm took the human visual system into account: instead of the entire image, it selected the image blocks of human interest using the local variance, constructed blurred image blocks through a low-pass filter, and calculated the difference of adjacent pixels between the original and blurred image blocks to obtain the objective quality evaluation parameter of the original image. The simulation results show that, compared with the traditional method, the proposed algorithm is more consistent with subjective evaluation, with the Pearson correlation coefficient increased by 0.01, and less complex, with half the running time.
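The re-blur principle above can be illustrated with a small sketch: blur a block once more and measure how much adjacent-pixel contrast it loses, since a sharp block loses far more local contrast than an already-blurry one (the box filter, the normalization and the `reblur_sharpness` name are assumptions, not the paper's exact formulation):

```python
import numpy as np

def reblur_sharpness(block, k=3):
    """No-reference blur score for one image block (re-blur sketch).
    Returns a value near 1 for sharp blocks and near 0 for blurry ones."""
    # k x k box low-pass filter with edge padding
    pad = k // 2
    padded = np.pad(block, pad, mode='edge')
    blurred = np.zeros_like(block, dtype=float)
    for i in range(block.shape[0]):
        for j in range(block.shape[1]):
            blurred[i, j] = padded[i:i + k, j:j + k].mean()

    def grad_sum(img):
        # total absolute difference between adjacent pixels
        return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

    g0, g1 = grad_sum(block.astype(float)), grad_sum(blurred)
    return max(g0 - g1, 0.0) / (g0 + 1e-12)
```

In the full method, such scores from the high-variance (visually interesting) blocks would be pooled into one image-level quality parameter.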
Optimized data association algorithm based on visual simultaneous localization and mapping
ZHAO Liang, CHEN Min, LI Hongchen
Journal of Computer Applications    2014, 34 (2): 576-579.  
The scale of data association increases as the map grows, which is one of the major reasons for the poor real-time performance of robots in the process of Simultaneous Localization And Mapping (SLAM). In the visual SLAM system, the Scale Invariant Feature Transform (SIFT) algorithm was used to extract natural landmarks. Two improvements were introduced to improve the real-time performance of data association: firstly, an "interest region" was extracted; secondly, the physical locations of the current landmarks were taken into account. The experimental results indicate that this improved method is reliable, and its capability of reducing computational complexity is obvious.
Collaborative filtering recommendation algorithm based on exact Euclidean locality-sensitive hashing
LI Hongmei, HE Wenning, CHEN Gang
Journal of Computer Applications    2014, 34 (12): 3481-3486.  
In recommendation systems, recommendation results are affected by the fact that rating data are large in volume, high-dimensional and extremely sparse, and by the limitations of traditional similarity measures in finding the nearest neighbors, namely huge computation and inaccurate results. Aiming at the resulting poor recommendation quality, a new collaborative filtering recommendation algorithm based on Exact Euclidean Locality-Sensitive Hashing (E2LSH) was presented. Firstly, the E2LSH algorithm was utilized to reduce the dimensionality of, and construct an index for, the large rating data; based on the index, the nearest neighbor users of the target user can be obtained efficiently. Then, a weighted strategy was applied to predict user ratings and perform collaborative filtering recommendation. The experimental results on typical datasets show that the proposed method can overcome the bottleneck of high dimensionality and sparsity to some degree, with high running efficiency and good recommendation performance.
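E2LSH's building block is the hash function h_{a,b}(v) = ⌊(a·v + b)/w⌋ with a drawn from a Gaussian and b uniform on [0, w), which sends nearby rating vectors to the same bucket with high probability so that buckets act as candidate nearest-neighbor sets. A minimal sketch with illustrative fixed parameters (in practice several such functions are concatenated and many tables are used):

```python
import numpy as np

def e2lsh_hash(v, a, b, w):
    """One E2LSH hash function: project v onto a random direction a,
    shift by b, and quantize into buckets of width w."""
    return int(np.floor((np.dot(a, v) + b) / w))

# toy example with fixed parameters: nearby points share a bucket
a, b, w = np.array([1.0, 1.0]), 0.0, 1.0
print(e2lsh_hash(np.array([0.20, 0.30]), a, b, w))  # bucket 0
print(e2lsh_hash(np.array([0.25, 0.35]), a, b, w))  # bucket 0 again
print(e2lsh_hash(np.array([5.00, 5.00]), a, b, w))  # a distant point: bucket 10
```

Looking up the target user's bucket then yields the candidate neighbors whose ratings are combined by the weighted prediction strategy.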
Improved object detection method of adaptive Gaussian mixture model
LI Hongsheng, XUE Yueju, HUANG Xiaolin, HUANG Ke, HE Jinhui
Journal of Computer Applications    2013, 33 (09): 2610-2613.   DOI: 10.11772/j.issn.1001-9081.2013.09.2610
The deficiencies of the Gaussian Mixture Model (GMM) are its high computation cost and its inability to deal with shadow and ghosting. An improved foreground detection algorithm based on the GMM was proposed. By analyzing the stability of the background, intermittent or continuous frame updating is chosen to update the parameters of the GMM, which efficiently reduces the runtime of the algorithm. In background updating, the updating rate is associated with the weight so that it changes with the weight, and the background pixels that appear after objects move are given a larger updating rate, which improves the stability of the background and solves the problems of ghosting and of the transformation between background and foreground. After object detection, the algorithm eliminates shadows based on the RGB color space distortion model and processes the result with Gaussian pyramid filtering and morphological filtering, thereby obtaining a better contour. The experimental results show that this algorithm improves the calculation efficiency and accurately segments the foreground objects.
Matrix-based authentication protocol for RFID and BAN logic analysis
LI Hongjing, LIU Dan
Journal of Computer Applications    2013, 33 (07): 1854-1857.   DOI: 10.11772/j.issn.1001-9081.2013.07.1854
Currently, most proposed Radio Frequency Identification (RFID) authentication protocols cannot resist replay and altering attacks. This article proposed a low-cost secure protocol, called the Matrix-based Secure Protocol (MSP), which can resist these attacks. MSP utilizes matrix theory and a Pseudo Random Number Generator (PRNG), and requires only 1000 gate equivalents. Compared to previously proposed protocols using the same algorithm, MSP has lower demands on storage and computing capability. The security of MSP was then analyzed with Burrows-Abadi-Needham (BAN) logic. The conclusion is that MSP applies well to RFID.
Cross domain reference monitor and its data-centered multilevel security model
LI Hongmin, WAN Pingguo, GE Yang
Journal of Computer Applications    2013, 33 (03): 717-719.   DOI: 10.3724/SP.J.1087.2013.00717
A new cross-domain reference monitor and Multi-Level Security (MLS) model were proposed for a trusted MLS system. The model was based on Commercial Off-The-Shelf (COTS) products such as commercial computers and security-compliant hardware devices. System-high networks were connected with the reference validation computer by trusted one-way transfer devices (EAL7) to realize the data-centered MLS model. The model allowed information to flow from a low domain to a high domain, and allowed sanitized data with a low label to flow from a high domain to a low domain, but data without a low label were prohibited from flowing from a high domain to a low domain. The model was applied to information systems under classification protection. Formal verification of the security model and policy demonstrates that an MLS system built from COTS products and trusted hardware devices is feasible.
Shadow removal algorithm based on Gaussian mixture model
ZHANG Hongying, LI Hong, SUN Yigang
Journal of Computer Applications    2013, 33 (01): 31-34.   DOI: 10.3724/SP.J.1087.2013.00031
Shadow removal is one of the most important parts of moving object detection in the field of intelligent video, since shadows definitely affect the recognition result. To address the disadvantages of shadow removal methods utilizing texture, a new algorithm based on the Gaussian Mixture Model (GMM) and the YCbCr color space was proposed. Firstly, moving regions were detected using the GMM. Secondly, a Gaussian mixture shadow model was built by analyzing the color statistics of the difference between the foreground and background of the moving regions in the YCbCr color space. Lastly, the threshold value of the shadow was obtained according to the Gaussian probability distribution in the YCbCr color space. More than 70 percent of the shadow pixels in the test image sequences were detected accurately by the algorithm. The experimental results show that the proposed algorithm is efficient and robust for object extraction and shadow detection in different scenes.
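The YCbCr shadow test rests on the observation that a shadow darkens the luminance while barely changing the chrominance relative to the background. A simplified fixed-threshold version of that test (the paper instead fits a Gaussian mixture shadow model to the foreground-background difference; the thresholds and function name below are assumptions):

```python
def is_shadow(fg, bg, y_lo=0.4, y_hi=0.95, c_tol=10):
    """Sketch of a YCbCr shadow test: a shadow pixel darkens the
    background (Y ratio within [y_lo, y_hi]) while leaving the
    chrominance nearly unchanged. fg and bg are (Y, Cb, Cr) tuples;
    the thresholds are illustrative, not the paper's fitted model."""
    y_f, cb_f, cr_f = fg
    y_b, cb_b, cr_b = bg
    ratio = y_f / max(y_b, 1e-6)          # luminance attenuation
    return (y_lo <= ratio <= y_hi and     # darker, but not black
            abs(cb_f - cb_b) <= c_tol and # chrominance preserved
            abs(cr_f - cr_b) <= c_tol)
```

Pixels that pass this test would be removed from the foreground mask before object recognition.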
Adaptive stereo matching based on color similarity
LI Hong, LI Da-hai, WANG Qiong-hua, CHENG Ying-feng, ZHANG Chong
Journal of Computer Applications    2012, 32 (12): 3373-3376.   DOI: 10.3724/SP.J.1087.2012.03373
An area matching method that combined a weight matrix with a similarity coefficient matrix was proposed. Firstly, the method obtained the weight matrix by using color similarity and distance proximity, and the values of the matrix were corrected with an edge matrix to improve the correctness of the edge pixels. Then a similarity coefficient matrix was adaptively obtained according to the sum of absolute differences of each point pair in the matching window between the left and right images. Finally, the method was evaluated by matching the four stereo image pairs (Tsukuba, Venus, Teddy and Cones) with ground truth provided in the Middlebury stereo database, and the overall accuracy reaches 91.82%, 96.19%, 76.6% and 86.9%, respectively.
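The weight matrix above combines color similarity and distance proximity. A common way to express such a weight, in the spirit of adaptive support-weight matching (the exponential form, the γ parameters and the function name are illustrative assumptions, not the paper's exact formula):

```python
import math

def support_weight(color_p, color_q, pos_p, pos_q, gamma_c=10.0, gamma_d=10.0):
    """Adaptive support weight of pixel q in p's matching window (sketch):
    the weight decays with color dissimilarity and spatial distance, so
    window pixels that likely share p's disparity dominate the
    aggregated matching cost."""
    # Euclidean color difference (e.g. in RGB or Lab)
    dc = math.sqrt(sum((a - b) ** 2 for a, b in zip(color_p, color_q)))
    # Euclidean spatial distance within the window
    dd = math.hypot(pos_p[0] - pos_q[0], pos_p[1] - pos_q[1])
    return math.exp(-(dc / gamma_c + dd / gamma_d))
```

Multiplying such weights from the left and right windows against the per-pixel absolute differences gives the aggregated cost that the disparity search minimizes.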
Large-capacity dynamic multiple watermarking with tamper localization based on Sudoku
ZHANG Li, LI Hong-song, YAN Xi-lan, LIAN De-liang
Journal of Computer Applications    2012, 32 (11): 3129-3146.   DOI: 10.3724/SP.J.1087.2012.03129
In this paper the largecapacity dynamic multiple watermarking algorithm based on Sudoku was proposed with two bit/pixel embedding capacity. The original image was divided into M×N pixels nonoverlapping areas. Different watermarks with size of 2 M×N pixels were embedded into the corresponding areas. If the watermarked area was tampered, the watermark in this area could not be extracted correctly. At any time, any watermarks can be embedded into any area as long as the areas are nonoverlapping. The experimental results show that the proposed algorithm has a larger watermarking embedding capacity and higher accuracy of tampered localization.
Signal sparse decomposition based on the two dictionary sets
WANG Shu-peng, WANG Wen-xiang, LI Hong-wei
Journal of Computer Applications    2012, 32 (09): 2512-2515.   DOI: 10.3724/SP.J.1087.2012.02512
A new sparse decomposition algorithm was presented to obtain a sparser representation of the signal. The algorithm maintains two dictionary sets, the selected dictionary set and the unselected dictionary set, and adds to the original Repeated Weighted Boosting Search (RWBS) a stricter process that selects the best kernel from the unselected dictionary set. The proposed algorithm can therefore produce a sparser model while preserving the advantages of the original algorithm. The effectiveness of the proposed algorithm is illustrated through several examples.
Methods of metadata management in block-level continuous data protection system
LI Hong-yan
Journal of Computer Applications    2012, 32 (08): 2141-2149.   DOI: 10.3724/SP.J.1087.2012.02141
To effectively organize the historical data of Continuous Data Protection (CDP) systems and improve recovery efficiency when disasters occur, three different methods for metadata management in CDP systems were presented; metadata management has an important impact on system recovery performance. The former two (DIR-MySQL and OPT-MySQL) were simple implementations based on the MySQL database, while the third (META-CDP) was designed by taking the characteristics of CDP metadata into consideration. The experimental results show that all three methods can increase the recovery efficiency of CDP systems. The MySQL-based methods degrade as the amount of recovery data increases, whereas the META-CDP method is not very sensitive to the increasing amount of recovery data; META-CDP is far more efficient than the other two and its performance is reasonably acceptable.
Reference | Related Articles | Metrics
Shadow generation algorithm of augmented reality using adaptive sampling and fusion
LI Hong-bo WU Liang-liang WU Yu
Journal of Computer Applications    2012, 32 (07): 1860-1863.   DOI: 10.3724/SP.J.1087.2012.01860
Abstract957)      PDF (595KB)(597)       Save
Since the soft shadows produced by existing shadow generation algorithms for Augmented Reality (AR) are unrealistic, a shadow generation algorithm using adaptive sampling and background fusion was proposed. First, the spatial distribution of the virtual objects' shadows was computed using a planar shadow algorithm that took occlusion into account. Then, to improve the soft shadow generation procedure of the swell-and-erode algorithm, an adaptive sampling method that obtained the illuminant union according to shape type was presented. Finally, since the shadow color obtained by the gray-image method was limited to a single channel, a method based on multiple channels and background fusion was presented. The experimental results show that the proposed algorithm yields more reasonable soft-shadow colors and renders soft shadows more effectively, thereby improving their realism.
Reference | Related Articles | Metrics
Compressed sensing-adaptive regularization for reconstruction of magnetic resonance image
LI Qing YANG Xiao-mei LI Hong
Journal of Computer Applications    2012, 32 (02): 541-544.   DOI: 10.3724/SP.J.1087.2012.00541
Abstract975)      PDF (569KB)(601)       Save
Current compressed sensing-based Magnetic Resonance image reconstruction (CS-MR) algorithms commonly use a global regularization parameter, which yields inferior reconstructions that cannot restore image edges and smooth noise at the same time. To solve this problem, a reconstruction method combining the sparse priors and the local smoothness priors of MR images was proposed, based on adaptive regularization and compressed sensing. The nonlinear conjugate gradient method was used to solve the optimization problem, and the local regularization parameter was adapted during the iterations. The adaptive parameter recovers image edges while smoothing noise, keeps the cost function convex within the definition region, and incorporates prior information to enhance the high-frequency components of the image. The experimental results show that the proposed method effectively restores image edges and smooths noise.
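The core idea of a spatially adaptive regularization weight can be sketched as follows: use a small weight near edges (high local variance) so they are preserved, and a large weight in smooth regions so noise is suppressed. The window size, bounds, and variance-based mapping below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def local_regularization_map(image, win=3, lam_min=0.01, lam_max=0.2):
    """Map local image activity to a per-pixel regularization
    weight: high local variance (edges) -> lam_min, flat regions
    -> lam_max."""
    img = np.asarray(image, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    h, w = img.shape
    var = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            var[i, j] = padded[i:i + win, j:j + win].var()
    v = var / (var.max() + 1e-12)   # normalize local activity to [0, 1]
    return lam_max - (lam_max - lam_min) * v
```

In an iterative solver such as nonlinear conjugate gradient, this map would be recomputed from the current estimate at each iteration so the weights track the evolving edges.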
Related Articles | Metrics
Improved algorithm for point cloud data simplification
ZHU Yu KANG Bao-sheng LI Hong-an SHI Fang-ling
Journal of Computer Applications    2012, 32 (02): 521-544.   DOI: 10.3724/SP.J.1087.2012.00521
Abstract1283)      PDF (670KB)(613)       Save
Since geometric features are often excessively lost in Kim's simplification process for scattered point clouds, an improved simplification method was proposed. First, the principal curvatures of points in the point cloud were estimated by least-squares parabolic fitting. Then an error metric based on the Hausdorff distance of principal curvature was used to retain and extract the feature points. Tests on measured data with different features show that the proposed method simplifies the point cloud data to a large extent, produces more uniform results, and fully preserves the original point cloud geometry without destroying small features, guaranteeing both quality and efficiency. The method can provide effective data for three-dimensional reconstruction, saving processing time and hardware resources.
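The feature-preserving selection step can be sketched as below: keep every high-curvature (feature) point and uniformly subsample the flat regions. For brevity the paper's Hausdorff-distance-of-curvature metric is replaced here by a simple per-point curvature threshold, and curvatures are assumed to be precomputed.

```python
import numpy as np

def simplify_point_cloud(points, curvatures, threshold, keep_every=4):
    """Curvature-aware simplification sketch: retain all points whose
    curvature exceeds `threshold`; keep only every `keep_every`-th
    point in the remaining flat regions."""
    points = np.asarray(points)
    curvatures = np.asarray(curvatures)
    feature_mask = curvatures >= threshold
    flat_idx = np.flatnonzero(~feature_mask)
    keep_idx = np.concatenate([np.flatnonzero(feature_mask),
                               flat_idx[::keep_every]])
    return points[np.sort(keep_idx)]
```

The trade-off between compression ratio and feature retention is controlled by `threshold` and `keep_every`, mirroring how curvature-based simplifiers balance uniformity against detail.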
Related Articles | Metrics
Intrusion prevention system against SIP distributed flooding attacks
LI Hong-bin LIN Hu Lü Xin YANG Xue-hua
Journal of Computer Applications    2011, 31 (10): 2660-2664.   DOI: 10.3724/SP.J.1087.2011.02660
Abstract1365)      PDF (694KB)(625)       Save
Based on research into distributed SIP flooding attack detection and defense, and combining the characteristics of IP-based distributed flooding attacks and SIP messages, a Two-level Defense Architecture against SIP Distributed Flooding Attacks (TDASDFA) was presented. TDASDFA logically comprised two defensive components: the First-level Defense Subsystem (FDS) and the Second-level Defense Subsystem (SDS). The FDS performed coarse-grained detection and defense on the SIP signaling stream, filtering out non-VoIP messages and discarding SIP messages from IP addresses that exceeded a specified rate, so as to ensure service availability; the SDS performed fine-grained detection and defense on SIP messages with a security-level-based mitigation method, identifying cunning attacks and low-rate attacks that bear obvious characteristics of malicious DoS attacks. Together, the FDS and SDS detected and defended the network status in real-time to weaken SIP distributed flooding attacks. The experimental results show that TDASDFA can detect and defend against SIP distributed flooding attacks, and it reduces the probability of a SIP proxy server or IMS server being attacked under abnormal network conditions.
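The coarse-grained first-level filter can be sketched as a per-IP sliding-window rate limiter: messages from any source exceeding the configured rate are discarded. The class, thresholds, and window mechanics below are illustrative assumptions standing in for the paper's FDS.

```python
import time
from collections import defaultdict, deque

class SipRateFilter:
    """Drop SIP messages from any source IP that sends more than
    `max_msgs` messages within a sliding `window` of seconds."""
    def __init__(self, max_msgs=50, window=1.0):
        self.max_msgs = max_msgs
        self.window = window
        self.history = defaultdict(deque)  # ip -> arrival timestamps

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.history[ip]
        # evict timestamps that have fallen out of the window
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_msgs:
            return False  # rate exceeded: discard the message
        q.append(now)
        return True
```

A fine-grained second stage (the SDS in the paper) would then inspect only the messages this filter admits, so the expensive per-message analysis runs on a bounded stream.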
Related Articles | Metrics
Adaptive media playout algorithm for H.264 scalable video streaming
Xiao-Feng LI Hong-sheng LIU Tong-ju RENG
Journal of Computer Applications    2011, 31 (07): 1956-1958.   DOI: 10.3724/SP.J.1087.2011.01956
Abstract964)      PDF (620KB)(918)       Save
To cope with varying network conditions in scalable video streaming, a new Adaptive Media Playout (AMP) algorithm was proposed that predicts the risk of playout outage and buffer overflow and adjusts the frame rate in advance. The algorithm estimated the network throughput and the frame lengths within the video's GOP structure for risk prediction, performed adjustments in K steps for good smoothness and speed, and reduced video quality loss by exploiting the scalability of the SVC stream. The simulation results show that the proposed algorithm outperforms existing smooth and conventional AMP algorithms in outage suppression, overflow handling and jitter performance.
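The stepped frame-rate adjustment can be sketched as a buffer-driven controller: slow playout down when the buffer risks underflow, speed it up when it risks overflow, and do so in K graded steps rather than one jump. The thresholds, step size, and mapping below are illustrative, not the paper's exact risk predictor.

```python
def playout_rate(buffer_level, low, high, normal_fps=30, k_steps=4):
    """Return an adjusted playout frame rate: below `low` the rate is
    reduced, above `high` it is increased, each in up to `k_steps`
    graded steps (here at most +/-25% of the nominal rate)."""
    step = 0.25 * normal_fps / k_steps
    if buffer_level < low:
        # the deeper below `low`, the more slowdown steps applied
        deficit = min(k_steps, 1 + int(k_steps * (low - buffer_level) / low))
        return normal_fps - deficit * step
    if buffer_level > high:
        surplus = min(k_steps, 1 + int(k_steps * (buffer_level - high) / high))
        return normal_fps + surplus * step
    return normal_fps
```

Grading the change over K steps is what keeps the playout speed variation perceptually smooth, which is the property the paper's simulations measure as jitter performance.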
Reference | Related Articles | Metrics
Research of AODV routing protocol based on neighbor cache
Shi-bao LI Li HONG
Journal of Computer Applications    2011, 31 (07): 1931-1933.   DOI: 10.3724/SP.J.1087.2011.01931
Abstract1225)      PDF (631KB)(766)       Save
In Mobile Ad Hoc Networks (MANET), conventional route discovery algorithms such as flooding and Expanding Ring Search (ERS) incur heavy routing overhead and long routing latency. To improve the performance of the routing protocol, a route discovery scheme based on neighbor caching was proposed. Neighbor information was extracted from data packets, and a neighbor cache table was established to store historical neighbor records. Route discovery then proceeded in two stages: 1) look up a node that had recently met the destination node; 2) if that failed, start a new ERS. The simulation results show that the new scheme significantly improves protocol performance under many kinds of simulation scenarios: routing overhead is saved and the end-to-end packet delay is reduced. At the same time, the new scheme is easy to implement.
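The two-stage discovery above can be sketched as a cache consulted before any ERS flood: if a sufficiently fresh record of the destination exists, the cached neighbor is used; otherwise the caller falls back to a normal expanding ring search. The table layout and expiry time are illustrative assumptions.

```python
import time

class NeighborCache:
    """Table of recently-heard neighbors, consulted before starting
    an Expanding Ring Search (ERS)."""
    def __init__(self, max_age=30.0):
        self.max_age = max_age
        self.table = {}  # node_id -> (heard_via, timestamp)

    def record(self, node_id, heard_via, now=None):
        """Stage 0: note that `node_id` was heard via `heard_via`,
        e.g. while forwarding a data packet."""
        now = time.monotonic() if now is None else now
        self.table[node_id] = (heard_via, now)

    def lookup(self, dest, now=None):
        """Stage 1: return a fresh next-hop hint for `dest`, or None,
        in which case the caller starts a new ERS (stage 2)."""
        now = time.monotonic() if now is None else now
        entry = self.table.get(dest)
        if entry and now - entry[1] <= self.max_age:
            return entry[0]
        return None
```

Because stage 1 is a local table lookup, every cache hit replaces an entire network-wide flood, which is where the overhead and latency savings come from.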
Reference | Related Articles | Metrics